We're on the topic of constraint satisfaction problems. We started with the naive approach of basically just trying to solve them by doing a tree search, and then we considered various ways of improving that.
The most important ones are, first, inference, which means that during our search procedure we replace our current constraint network by an equivalent constraint network that has the same solutions but stronger constraints, thus ruling out partial assignments early and pruning branches in our search tree. The second is decomposing our constraint network into several smaller networks that we can solve independently, and then composing the solutions for the smaller networks into one global solution for the whole network.
The easiest way of doing inference is forward checking, which basically means that whenever we assign a value to some variable, we rule out all values of the other variables that violate the constraints between them. Arc consistency is the concept that we're going to use to improve forward checking immensely.
So the notion of arc consistency subsumes forward checking. We say a variable in our constraint network is arc-consistent relative to some other variable if for every value in the domain of the first variable there is guaranteed to be at least one value in the domain of the other variable such that the two together don't violate any of the constraints between them.
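Stated a bit more compactly, in notation of my own choosing rather than the slide's: for variables $v_i$ and $v_j$ with domains $D_i$ and $D_j$ and a binary constraint $C_{ij}$,

$$v_i \text{ is arc-consistent relative to } v_j \iff \forall a \in D_i \;\exists b \in D_j : (a, b) \in C_{ij}.$$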
Naturally, we call the whole network arc-consistent if every variable is arc-consistent with respect to every other variable. Enforcing arc consistency is sound in the sense that it really does yield an equivalent network, and it nicely subsumes forward checking as well, so the only question is how we go about enforcing it.
Here's the basic Revise algorithm, which makes one variable arc-consistent with respect to some other variable. We literally just check whether it is already arc-consistent; if not, i.e. if there's some value in its domain that has no supporting value in the other variable's domain, we just throw that value out of the domain.
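A rough sketch of this Revise step in Python (my own formulation; the set-based domain representation and the `constraint` predicate are assumptions, not the lecture's exact pseudocode):

```python
def revise(domains, constraint, vi, vj):
    """Make vi arc-consistent relative to vj: remove every value in vi's
    domain that has no supporting value in vj's domain.
    Returns True if at least one value was removed."""
    removed = False
    for x in list(domains[vi]):              # copy, since we mutate the set
        # keep x only if some y in vj's domain satisfies the constraint
        if not any(constraint(x, y) for y in domains[vj]):
            domains[vi].discard(x)
            removed = True
    return removed
```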
Here's one example of how this works. We have a small network with three nodes. We want to make V3 arc-consistent with respect to V2 given the constraint between V2 and V3, so we check every one of V3's values. The value one in V3 has no value in V2's domain that satisfies the constraint, so it violates arc consistency and we get rid of one. The value two breaks arc consistency as well, so we get rid of two too, and now V3 is arc-consistent relative to V2.
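To make this concrete with made-up numbers, since the actual domains and constraint are only on the slide: suppose D(V2) = {2, 3}, D(V3) = {1, 2, 3}, and the constraint is V2 < V3. Running the revise sketch from above then prunes exactly the values one and two from V3's domain:

```python
domains = {"V2": {2, 3}, "V3": {1, 2, 3}}
# constraint V2 < V3, written from V3's point of view
revise(domains, lambda v3, v2: v2 < v3, "V3", "V2")
print(domains["V3"])   # {3}: the values 1 and 2 were pruned, as in the example
```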
The most naive way of implementing this is to just write a function AC-1 that enforces arc consistency of every variable with respect to every other variable, and to do that in each recursion step of our backtracking algorithm. We can do that. It's not exactly efficient, but it's the first, naive way we would go about doing this.
The problem here is that it leads to a lot of redundant computation, because every time we change any domain we have to reconsider every pair of variables and make sure that they're still arc-consistent.
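A minimal sketch of what AC-1 could look like, building on the revise helper above (again my own formulation; the `constraints` dictionary of directed variable pairs is an assumption):

```python
def ac1(domains, constraints):
    """AC-1: repeatedly revise every directed pair of constrained variables
    until a full pass removes nothing."""
    changed = True
    while changed:
        changed = False
        for (vi, vj), c in constraints.items():
            # even if only one domain shrank, AC-1 redundantly re-checks all pairs
            if revise(domains, c, vi, vj):
                changed = True
    return domains
```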
That's kind of pointless overhead, so we have the algorithm AC-3, which tries to be smarter about this. What AC-3 does is keep track of all the pairs of variables we have already considered for arc consistency, and then be smart about which pairs of variables we reconsider every time we delete values from one of the domains.
This is what happens here.
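Here is a corresponding sketch of AC-3, keeping a worklist of directed arcs so that after a deletion only the arcs pointing into the changed variable are reconsidered (same assumptions as above; one common way to write it, not necessarily the lecture's pseudocode):

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3: process a queue of directed arcs; when revising (vi, vj) shrinks
    vi's domain, only the arcs pointing into vi need to be re-queued."""
    queue = deque(constraints.keys())          # all directed arcs (vi, vj)
    while queue:
        vi, vj = queue.popleft()
        if revise(domains, constraints[(vi, vj)], vi, vj):
            if not domains[vi]:
                return None                    # a domain ran empty: no solution
            # re-examine only the neighbours whose support in vi may be gone
            for (vk, vl) in constraints:
                if vl == vi and vk != vj:
                    queue.append((vk, vl))
    return domains
```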